Achieving Multimodal Cohesion during Intercultural Conversations

Authors

Mei-Ya Liang

National Central University, Taiwan

Abstract

How do English as a lingua franca (ELF) speakers achieve multimodal cohesion on the basis of their specific interests and cultural backgrounds? From a dialogic and collaborative view of communication, this study focuses on how verbal and nonverbal modes cohere during intercultural conversations. The data include approximately 160 minutes of transcribed video recordings of ELF interactions with four groups of university students who engaged in two classroom tasks: responding to a film excerpt and to a music video. The results showed that individual participants engaged in processes of initiation and response, supporting or challenging one another through a range of communication strategies. The results further indicated that, during these discursive activities, the small groups achieved multimodal cohesion by deploying specific embodied resources in four types of participation structure: (1) interlock, (2) unison, (3) plurality and (4) dominance. Future research may broaden our understanding of the embodied interaction involved in intercultural conversation.


Similar resources


Multimodal Intercultural Information and Communication Technology - A Framework for Designing and Evaluating Multimodal Intercultural Communicators

The paper presents a framework, combined with a checklist, for designing and evaluating multimodal, intercultural ICT, especially when embodied artificial communicators are used as front ends for databases, as digital assistants, as tutors in pedagogical programs, or as players in games, etc. Such a framework is of increasing interest, since the use of ICT across cultural boundaries in combination w...


Predicting Subjectivity in Multimodal Conversations

In this research we aim to detect subjective sentences in multimodal conversations. We introduce a novel technique wherein subjective patterns are learned from both labeled and unlabeled data, using n-gram word sequences with varying levels of lexical instantiation. Applying this technique to meeting speech and email conversations, we gain significant improvement over state-of-the-art approache...
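The "varying levels of lexical instantiation" idea can be pictured as n-gram patterns in which each slot is kept either as the surface word or abstracted to a coarser label such as its part-of-speech tag. The Python sketch below only illustrates that idea; the function name, the choice of POS tags as the abstraction level and the toy input are assumptions made for illustration, not the authors' implementation.

```python
from itertools import product

def ngram_patterns(tagged_tokens, n=3):
    """Enumerate n-gram patterns with varying levels of lexical instantiation.

    Each slot of an n-gram is realised either as the literal word or as its
    POS tag, so the trigram ("really", "hate", "this") also yields mixed
    patterns such as ("really", "VBP", "DT"). Hypothetical sketch only.
    """
    patterns = set()
    for i in range(len(tagged_tokens) - n + 1):
        window = tagged_tokens[i:i + n]
        # choices[j] holds the two possible realisations of slot j
        choices = [(word.lower(), tag) for word, tag in window]
        for combo in product(*choices):
            patterns.add(combo)
    return patterns

# Toy POS-tagged fragment of a meeting utterance
tagged = [("I", "PRP"), ("really", "RB"), ("hate", "VBP"), ("this", "DT")]
for pattern in sorted(ngram_patterns(tagged, n=3)):
    print(pattern)
```

Patterns of this kind could then be counted separately in labeled and unlabeled data to score how strongly each one signals subjectivity.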


Some challenges for multimodal intercultural artificial communicators

This paper discusses some of the challenges in constructing multimodal intercultural artificial communicators. Such challenges concern the notions of intercultural communication and culture, e.g. what aspects of culture are investigated and what methods are used. The paper also presents two examples of the types of investigation that are needed and concludes with a short discussion of some consequen...


Triggering Memories of Conversations using Multimodal Classifiers

Our personal conversation memory agent is a wearable ‘experience collection’ system, which unobtrusively records the wearer’s conversation, recognizes the face of the dialog partner and remembers his/her voice. When the system sees the same person’s face or hears the same voice it uses a summary of the last conversation with this person to remind the wearer. To correctly identify a person and h...
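One minimal way to picture the identification step is late fusion of per-person confidences from a face recogniser and a voice recogniser, followed by a lookup of the stored summary for the best match. The sketch below is a hypothetical illustration of that flow; the class, names, weights and threshold are assumptions, not the system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class PersonRecord:
    name: str
    last_summary: str = ""

class ConversationMemoryAgent:
    """Toy late fusion of face and voice confidences to recall a prior talk.

    face_scores and voice_scores map person ids to confidences in [0, 1];
    fuse() averages the two modalities and remind() returns the stored
    summary for the best match above a threshold. Illustrative only.
    """

    def __init__(self, threshold=0.6):
        self.people = {}          # person id -> PersonRecord
        self.threshold = threshold

    def fuse(self, face_scores, voice_scores):
        ids = set(face_scores) | set(voice_scores)
        # simple late fusion: unweighted mean of the two modality confidences
        return {pid: (face_scores.get(pid, 0.0) + voice_scores.get(pid, 0.0)) / 2
                for pid in ids}

    def remind(self, face_scores, voice_scores):
        fused = self.fuse(face_scores, voice_scores)
        pid, score = max(fused.items(), key=lambda kv: kv[1])
        record = self.people.get(pid)
        if record is None or score < self.threshold:
            return None
        return f"Last time with {record.name}: {record.last_summary}"

agent = ConversationMemoryAgent()
agent.people["p1"] = PersonRecord("Alex", "discussed the project deadline.")
agent.people["p2"] = PersonRecord("Sam", "talked about the conference trip.")
print(agent.remind({"p1": 0.8, "p2": 0.3}, {"p1": 0.7}))
```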


A Multimodal Corpus for Studying Dominance in Small Group Conversations

We present a new multimodal corpus with dominance annotations on small group conversations. We used five-minute non-overlapping slices from a subset of meetings selected from the popular Augmented Multi-party Interaction (AMI) corpus. The total length of the annotated corpus corresponds to 10 hours of meeting data. Each meeting is observed and assessed by three annotators according to their lev...




Journal:
International Journal of Society, Culture and Language

Volume 4, Issue 2, pages 55-70
